What is a meta-analysis?

A form of evidence synthesis, which entails statistically synthesising the results of related studies to summarise research knowledge, and to generate or test new ideas with greater power than any single study.

 

In its ideal form, it sits atop the ‘evidence pyramid’

Why do we do meta-analyses?

Grappling with an ever-increasing evidence base

 

Why do we do evidence synthesis?

Reconciling conflicting evidence and evaluating reliability.

(see also: calling out bs)

Why do we do evidence synthesis?

Advancing fundamental knowledge
Informing evidence-based applications & policy

Why do we do evidence synthesis?

Personal empowerment and information literacy

 

Approaches to evidence synthesis

Traditional reviews:

  • Haphazard synthesis of literature
  • Suffers from selection, discussion, and quality biases
  • Not transparent, hence less robust and reproducible

Systematic reviews/meta-analyses:

  • ‘Gold standard’
  • Exhaustive, comprehensive collation of evidence
  • Transparent, reproducible methods & analyses
  • Akin to empirical research process

Approaches to evidence synthesis

  • Systematic review or map
  • Comprehensive map and/or critical analysis of an evidence base
  • Scope: broader
  • E.g. What are the effects of land protection on human wellbeing?

 

  • Meta-analysis
  • Statistical analysis of the results of individual studies
  • Scope: narrower
  • E.g. What are the non-target effects of neonicotinoid pesticides?
All underlain by a systematic search

The meta-analysis pipeline

1: Formulate & refine question

 

2: Design & execute search

 

3: Screen results

 

4: Extract data

 

5: Meta-analyse & interpret results
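Step 5 is the statistical core of the pipeline. As a rough illustration of what "meta-analyse" means in practice, here is a minimal fixed-effect, inverse-variance pooling sketch in Python; the effect sizes and variances are made-up numbers, and real analyses would typically use a dedicated package such as metafor in R.

```python
import math

# Fixed-effect, inverse-variance meta-analysis: each study's effect size
# is weighted by the inverse of its sampling variance, so more precise
# studies contribute more to the pooled estimate.
def fixed_effect_meta(effects, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))             # pooled standard error
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # approximate 95% CI
    return pooled, se, ci

# Invented effect sizes (e.g. standardised mean differences) and variances.
effects = [0.30, 0.10, 0.45, 0.20]
variances = [0.04, 0.02, 0.09, 0.03]
pooled, se, ci = fixed_effect_meta(effects, variances)
print(f"pooled effect = {pooled:.3f} (SE {se:.3f})")
```

Note how the second study, with the smallest variance, pulls the pooled estimate towards its own effect size.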

 

Formulating questions

How do we formulate questions?

  • Evidence needs often manifest as a broad, open-framed problem
  • Typical initial phase:

How do we formulate questions?

 

1: Identify broad problem

 

2: Explore and refine narrow questions

 

3: Deconstruct and assess suitability of questions for ES (P.I.C.O.S)

Deconstructing and assessing suitability

PICOS

 

PICOS is a framework to guide thinking, not a set of hard-and-fast rules. It stands for:

  • Population
  • Intervention/exposure
  • Comparator
  • Outcome
  • Study design

Deconstructing and assessing suitability

  • Population: The unit of study (e.g. ecosystem, habitat, species).
  • Intervention/exposure: In eco/evo, any independent variable of interest. Often absent in observational studies.
  • Comparator: A control, baseline, or means of comparison between groups or timepoints. Often absent in observational studies.
  • Outcome: The objectives that can be reliably measured (i.e. the dependent variables).
  • Study design: The method of data collection, particularly experimental vs observational designs.

Deconstructing and assessing suitability

What is the effect of neonicotinoid pesticides on pollinator productivity?

 

  • P:
  • I:
  • C:
  • O:
  • S:

Deconstructing and assessing suitability

What is the effect of neonicotinoid pesticides on pollinator productivity?

 

  • P: Pollinators
  • I: Neonicotinoid
  • C: Non-exposed pollinators
  • O: Productivity
  • S: Experimental

Deconstructing and assessing suitability

What is the prevalence (ppm) of neonicotinoid pesticides in irrigation runoff?

 

  • P:
  • I:
  • C:
  • O:
  • S:

Deconstructing and assessing suitability

What is the prevalence (ppm) of neonicotinoid pesticides in irrigation runoff?

 

  • P: Irrigation runoff
  • I: NA
  • C: NA
  • O: Prevalence (ppm) of neonicotinoid pesticides
  • S: Observational
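The two worked examples above can also be captured as a small record type when keeping notes for screening; here is a Python sketch (the class and field names are illustrative, not a standard), with `Optional` marking the elements that can be absent in observational questions.

```python
from dataclasses import dataclass
from typing import Optional

# PICOS as a record type; Optional fields capture elements that may be
# absent (no intervention/comparator in purely observational questions).
@dataclass
class PICOS:
    population: str
    intervention: Optional[str]
    comparator: Optional[str]
    outcome: str
    study_design: str

neonic = PICOS("Pollinators", "Neonicotinoid exposure",
               "Non-exposed pollinators", "Productivity", "Experimental")
runoff = PICOS("Irrigation runoff", None, None,
               "Prevalence (ppm) of neonicotinoid pesticides", "Observational")
print(neonic)
print(runoff)
```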

 

Systematically searching

Planning a search

Where to search

Where to search

Bibliographic databases

  • Web of Science
    • Core collection
    • PubMed
    • SciELO
    • Zoological Record
  • Scopus
  • Agricola
  • CAB Abstracts
  • Biological Abstracts
  • NOT Google Scholar
Reviews search a mean of ~7 databases (range 2–75)

Where to search

Grey literature

  • Preprints
    • bioRxiv
    • arXiv
    • EcoEvoRxiv
  • Google Scholar
  • Unpublished data
    • Public calls
    • Email researchers
  • Theses
    • University repositories
    • Public repositories
  • Policy documents

 

Designing your search

Designing your search

Establish a test-set of studies

  • Studies which you know will be included
  • Informally retrieved from your own reading, experts, etc.
  • n = 10+? But this may vary greatly.
  • Valuable for assessing the general feasibility & tractability of the meta-analysis, and for designing & validating the search.

Designing your search

Building a basic search:

  • Guided by the review question & PICOS elements
  • Databases generally search for keywords in the TITLE, ABSTRACT and KEYWORDS
  • Databases often have a range of search options
  • Databases use phrasing, Boolean operators (AND, OR, NOT) and nesting differently

 

Example

How effective is wetland restoration for reducing nitrogen and phosphorus?

Designing your search

How effective is wetland restoration for reducing nitrogen and phosphorus?

 

  • P: Wetland
  • I: Restoration
  • C: Unrestored/degraded/non-existent wetlands
  • O: Nitrogen, phosphorus
  • S: Experimental, observational

Designing your search

A Boolean refresher:

  • OR terms (synonyms)
    • wetland OR bog
  • AND terms (combine concepts)
    • wetland AND restoration
  • Quotation marks (exact phrases)
    • “wet meadow”
  • Wildcards (word variations)
    • phosph* [phosphate, phosphorus]
  • NOT (exclude terms)

Designing your search

How effective is wetland restoration for reducing nitrogen and phosphorus?
Compile synonyms for PIO

(wetland* OR pond OR mire* OR marsh OR fen OR "wet meadow" OR riparian OR "flood plain" OR reed) AND (construct* OR creat* OR restor* OR man*made OR flooding OR inundation) AND (nitrogen OR phosph* OR nitrate OR TKN OR ammoni*)
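A string like this can also be assembled programmatically from per-element synonym lists, which makes it easy to test terms individually and adapt the syntax per database. A Python sketch using the example's terms (the `or_group` helper is made up for illustration):

```python
# Assemble a Boolean search string from per-element synonym lists: quote
# multi-word phrases, OR within an element, AND between elements.
def or_group(terms):
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

population = ["wetland*", "pond", "mire*", "marsh", "fen", "wet meadow",
              "riparian", "flood plain", "reed"]
intervention = ["construct*", "creat*", "restor*", "man*made",
                "flooding", "inundation"]
outcome = ["nitrogen", "phosph*", "nitrate", "TKN", "ammoni*"]

search = " AND ".join(or_group(g) for g in (population, intervention, outcome))
print(search)
```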

Designing your search

Test search terms individually for efficacy

Designing your search

Backward (cited) ↔ Forward (citing) search

 

Validating your search

Documenting your search

 

Screening and appraising studies

Screening and appraising studies

  • Lots of results! But what’s relevant?
  • Screen against predetermined inclusion criteria in two stages
    1. Title & abstract screening
    2. Full-text screening

 

Common inclusion rates

  • Title & abstract: ~52% (range 10–74%)
  • Full text: ~29% (range 0–56%)
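Applied to a hypothetical search, these average rates give a rough feel for the screening workload; the 1,000-record starting point below is invented for illustration.

```python
# Back-of-envelope screening workload using the mean inclusion rates above.
records = 1000
full_texts = records * 0.52   # ~52% pass title & abstract screening
included = full_texts * 0.29  # ~29% of full texts are included
print(f"{records} records -> {full_texts:.0f} full texts -> "
      f"~{included:.0f} included studies")
```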

Inclusion & quality criteria

A priori, justifiable criteria to minimise bias and maintain quality and transparency


Inclusion & quality criteria

  • Inclusion criteria
    • Let PICOS be your (partial) guide
    • Relevant population(s)?
    • Necessary intervention?
    • Appropriate comparator?
    • Suitable outcome?
  • Quality criteria (see R.O.B. and R.O.S.E.S)
    • Suitable controls?
    • Appropriate treatment assignment?
    • Minimum sample sizes?
    • Confounding variables?
    • Typically at full-text stage only

The screening process

The two-step process (which can become three steps when there are very many records)

Title + Abstract

↓

(Text retrieval)

↓

Full text

The screening process

Keeping note:

  • Title + abstract
    • Efficiency is key; there are often MANY reasons for exclusion
    • Include / Exclude
  • Full text
    • Need precise reasons for all studies
    • Include
    • Exclude: P, I, C, O, or S
    • Exclude: review / no primary data / commentary / modelling study / etc.
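A simple way to keep these notes machine-readable is to tally decision labels, so exclusion reasons can later be reported (e.g. in a PRISMA-style flow diagram). A Python sketch; the labels and records below are invented for illustration.

```python
from collections import Counter

# Tally full-text screening decisions so each exclusion reason can be
# counted and reported alongside the review.
decisions = ["include", "exclude:population", "exclude:outcome",
             "include", "exclude:review", "exclude:comparator",
             "exclude:outcome"]
tally = Counter(decisions)
for label, n in tally.most_common():
    print(label, n)
```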

Free software to help along the way

 

Thanks!